    Design and Evaluation of 3D Positioning Techniques for Multi-touch Displays

    Multi-touch displays are a promising technology for displaying and manipulating 3D data, but exploiting their capabilities fully requires appropriate interaction techniques. In this paper, we explore the design of free 3D positioning techniques for multi-touch displays that exploit the additional degrees of freedom this technology provides. We present a first interaction technique that extends the standard four-viewport technique found in commercial CAD applications, and a second technique, the Z-technique, designed to allow free 3D positioning with a single view of the scene. The two techniques were then compared in a controlled experiment. Results show no statistically significant difference in positioning time but a clear preference for the Z-technique.
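
    A minimal sketch of the single-view positioning idea, in Python: one finger drags the object in the screen plane while the relative vertical motion of a second finger moves it along the view axis. The class name, gains, and pixel-to-world scale below are illustrative assumptions, not values taken from the paper.

        class SingleViewPositioner:
            def __init__(self, depth_gain=0.01):
                self.pos = [0.0, 0.0, 0.0]    # object position in camera space
                self.depth_gain = depth_gain  # world units of depth per pixel of travel

            def primary_drag(self, dx_px, dy_px, px_to_world):
                # Direct manipulation: the first finger translates the object
                # parallel to the screen plane.
                self.pos[0] += dx_px * px_to_world
                self.pos[1] += dy_px * px_to_world

            def secondary_drag(self, dy_px):
                # Indirect depth control: the second finger's relative vertical
                # motion pushes the object along the view (z) axis.
                self.pos[2] += dy_px * self.depth_gain

        positioner = SingleViewPositioner()
        positioner.primary_drag(12, -5, px_to_world=0.5)
        positioner.secondary_drag(40)
        print(positioner.pos)  # [6.0, -2.5, 0.4]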

    Design and evaluation of fusion approach for combining brain and gaze inputs for target selection.

    Gaze-based interfaces and Brain-Computer Interfaces (BCIs) allow hands-free human-computer interaction. In this paper, we investigate the combination of gaze and brain-computer interfaces and propose a novel selection technique for 2D target acquisition based on input fusion. This approach combines the probabilistic models of each input to better estimate the user's intent. We evaluated its performance against existing gaze-based and brain-computer interaction techniques. Twelve participants took part in our study, in which they had to search for and select 2D targets with each of the evaluated techniques. Our fusion-based hybrid interaction technique was found to be more reliable than previous gaze-and-BCI hybrid techniques for 10 of the 12 participants, while being 29% faster on average. However, as has been observed for hybrid gaze-and-speech interaction, the gaze-only technique still provides the best performance. Our results should encourage the use of input fusion, as opposed to sequential interaction, to design better hybrid interfaces.
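
    A minimal sketch of the input-fusion step, assuming each modality yields a per-target probability distribution. The normalized-product rule and the confidence threshold below are one plausible reading of "combining the probabilistic models"; the paper's exact estimator may differ.

        def fuse(p_gaze, p_bci):
            # Naive-Bayes-style fusion: multiply per-target probabilities
            # from the gaze and BCI classifiers, then renormalize.
            joint = [g * b for g, b in zip(p_gaze, p_bci)]
            total = sum(joint)
            return [j / total for j in joint]

        def select(p_fused, threshold=0.7):
            # Commit to a target only when the fused belief is confident enough.
            best = max(range(len(p_fused)), key=p_fused.__getitem__)
            return best if p_fused[best] >= threshold else None

        p_gaze = [0.60, 0.25, 0.15]  # e.g. from a gaze-dwell model
        p_bci = [0.70, 0.20, 0.10]   # e.g. from an EEG classifier
        print(select(fuse(p_gaze, p_bci)))  # 0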

    Detection of signs of Parkinson's disease using dynamical features via an indirect pointing device

    In this paper, we study the problem of detecting early signs of Parkinson's disease during indirect human-computer interaction via a computer mouse. The experimental setup provides a signal determined by the screen pointer position. An appropriate choice of segments in the raw cursor-position data yields a filtered signal from which a number of quantifiable criteria can be obtained. These dynamical features are derived using methods from control theory. Based on these indicators, a subsequent analysis allows the detection of users with tremor. Real-life data from patients with Parkinson's disease and from healthy controls are used to illustrate our detection method.
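
    A minimal sketch of one plausible indicator in Python: the fraction of cursor-velocity power in the 4-6 Hz band typical of parkinsonian tremor. The paper derives its features with control-theoretic methods; this spectral stand-in, the sampling rate, and the band edges are assumptions made for illustration.

        import numpy as np

        def tremor_band_ratio(x, fs=125.0, band=(4.0, 6.0)):
            # Ratio of tremor-band power to total power of the cursor velocity.
            v = np.diff(x)  # velocity from raw cursor positions
            power = np.abs(np.fft.rfft(v)) ** 2
            freqs = np.fft.rfftfreq(len(v), d=1.0 / fs)
            in_band = (freqs >= band[0]) & (freqs <= band[1])
            return power[in_band].sum() / power.sum()

        # Synthetic segment: a 5 Hz oscillation on top of slow drift.
        t = np.arange(0, 2.0, 1.0 / 125.0)
        x = 0.2 * np.sin(2 * np.pi * 5.0 * t) + np.cumsum(np.random.randn(len(t)) * 0.01)
        print(tremor_band_ratio(x))  # clearly elevated for a strong 5 Hz component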

    System for colocating a touch screen and a virtual object, and device for manipulating virtual objects implementing such a system

    The invention relates to a system (10) for displaying at least one virtual object, comprising a secondary screen (20) for displaying the virtual object, a primary screen (30), an optical means for overlaying the images displayed on the secondary screen (20) with the images displayed on the primary screen (30), and a pointing surface combined with the primary screen (30) for detecting the contact of one or more physical pointing elements. A device (90) for manipulating at least one virtual object comprises calculation means for generating the images of the virtual object displayed on the system (10) from information output by the system (10) in accordance with the actions of the operator (100).
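
    A minimal, hypothetical sketch of the co-location step: because the optical means makes the two images coincide, a contact detected on the pointing surface of the primary screen (30) can be mapped into the image of the virtual object on the secondary screen (20) with a calibration transform. The homography values below are illustrative placeholders, not taken from the patent.

        import numpy as np

        # Calibration homography taking primary-screen touch pixels to
        # secondary-screen (virtual object) pixels; values are placeholders.
        H = np.array([[0.5, 0.0, 10.0],
                      [0.0, 0.5, 20.0],
                      [0.0, 0.0, 1.0]])

        def touch_to_virtual(u, v):
            # Map a touch contact (u, v) onto the overlaid virtual image plane.
            p = H @ np.array([u, v, 1.0])
            return p[0] / p[2], p[1] / p[2]

        print(touch_to_virtual(100.0, 200.0))  # (60.0, 120.0)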

    Towards BCI-Based Interfaces for Augmented Reality: Feasibility, Design and Evaluation

    Brain-Computer Interfaces (BCIs) enable users to interact with computers without any dedicated movement, bringing new hands-free interaction paradigms. In this paper we study the combination of BCIs and Augmented Reality (AR). We first tested the feasibility of using a BCI in AR settings based on Optical See-Through Head-Mounted Displays (OST-HMDs). Experimental results showed that BCI and OST-HMD equipment (an EEG headset and a HoloLens in our case) are compatible and that small head movements can be tolerated when using the BCI. Second, we introduced a design space for command display strategies for BCIs in AR that exploit a well-known brain pattern called the Steady-State Visually Evoked Potential (SSVEP). Our design space relies on five dimensions concerning the visual layout of the BCI menu: orientation, frame of reference, anchorage, size, and explicitness. We implemented various BCI-based display strategies and tested them in the context of mobile robot control in AR. Our findings were finally integrated into an operational prototype in which a real mobile robot is controlled in AR using a BCI and a HoloLens headset. Taken together, our results (four user studies) and our methodology could pave the way for future interaction schemes in Augmented Reality exploiting 3D User Interfaces based on brain activity and BCIs.
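
    A minimal sketch of standard SSVEP command detection by canonical correlation analysis (CCA), a common way to decide which flickering menu item the user attends to; the paper's exact classifier is not specified here, and the flicker frequencies and sampling rate below are assumptions.

        import numpy as np
        from sklearn.cross_decomposition import CCA

        def reference_signals(freq, n_samples, fs, n_harmonics=2):
            # Sine/cosine templates at the flicker frequency and its harmonics.
            t = np.arange(n_samples) / fs
            refs = []
            for h in range(1, n_harmonics + 1):
                refs += [np.sin(2 * np.pi * h * freq * t),
                         np.cos(2 * np.pi * h * freq * t)]
            return np.column_stack(refs)

        def detect_command(eeg, fs, freqs):
            # Return the index of the flicker frequency whose templates
            # correlate best with the EEG window (n_samples, n_channels).
            scores = []
            for f in freqs:
                refs = reference_signals(f, eeg.shape[0], fs)
                x_c, y_c = CCA(n_components=1).fit_transform(eeg, refs)
                scores.append(np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1])
            return int(np.argmax(scores))

        # Synthetic example: 1 s of 2-channel EEG dominated by a 12 Hz flicker.
        fs, freqs = 250.0, [10.0, 12.0, 15.0]
        t = np.arange(int(fs)) / fs
        eeg = np.column_stack([np.sin(2 * np.pi * 12.0 * t),
                               np.cos(2 * np.pi * 12.0 * t)])
        eeg += 0.5 * np.random.randn(int(fs), 2)
        print(detect_command(eeg, fs, freqs))  # expected: 1 (the 12 Hz target)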